Preceding subsections have discussed many criteria for the system.
Moore and Newell have published some reasonable design issues for any
proposed understanding system, and we shall now see how our system
answers their questions$$ Each point of the taxonomy which they
provide before these questions is covered by the AM system.$.  Recall
that a BEING is the name for the kind of knowledge module representing
one concept: the data structure gathering together the facets of
that concept.
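
As a concrete picture of that data structure, here is a minimal
sketch in modern (Python-style) notation; AM itself is written in
Interlisp, and the particular facet names and values below are merely
illustrative assumptions, not a transcription of the program.

.BEGIN NOFILL

# One BEING, rendered (hypothetically) as a structure of named
# facets, each facet keeping whatever format suits it.
PRIMES = {
    "NAME":            "Primes",
    "DEFINITIONS":     [lambda n: n > 1 and
                        all(n % d != 0 for d in range(2, n))],
    "EXAMPLES":        [2, 3, 5, 7, 11, 13],   # stored as explicit items
    "GENERALIZATIONS": ["Numbers"],            # ties to other BEINGs
    "WORTH":           700,                    # crude numeric worth value
}

# Partial knowledge about a topic X is just an incomplete BEING X:
SQUARES = {"NAME": "Squares", "GENERALIZATIONS": ["Numbers"]}

.END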

.BN

λλ  Representation: complex  BEINGs,  simple situation/action  rules,
opaque functions.  Each BEING represents one very specialized expert,
one mathematical  concept.   Partial  knowledge about  a topic  X  is
naturally expressed as an incomplete BEING X.  Each differently-named
facet has its own format.

λλ  Action:  Most knowledge  is  stored in  facets of  concepts  in a
nearly-executable way; the remainder  is stored so that  the "active"
segment can  easily use it as it  runs.  The place that  any piece of
information is stored is carefully chosen  so that it will be  evoked
just in "fitting" situations.  The only real  action in the system is
the  selective completion  of concepts'  facets, with  the occasional
creation of a new concept.

λλ Assimilation: There is  no sharp distinction between the  internal
knowledge and the system's goal; the goal is really nothing more than
to  extend the  given knowledge  while maintaining both  the priority
value of the current task and the worth values of newly-created
concepts.  The only external  entities are the user and the simulated
physical world.   Contact with  the first is  through a  simpleminded
translation  scheme, with  the latter  through  evaluation of  opaque
functions on observable data and examination of the results.

λλ Accommodation: this is not exhibited to any high degree by AM.

λλ Directionality: Relevant knowledge is gathered up at each step to
satisfy the current task chosen from the agenda.  This is done by
rippling away from the concept mentioned in that task.  At each
stage, there will be thousands of unfilled-in facets, and the system
simply chooses the most interesting one to work on (see the first
sketch following this list).

λλ Efficiency: The contents of the facets exist both in compiled form
and in inspectable form (see the second sketch following this list).
Communication with a human user takes place very rarely, and is very
"clean" when it does occur, so it isn't a bottleneck.  AM is an
informal system, relying on a tentative calculus of interestingness,
worth numbers, and priority values.  At the moment, there are no
concepts who are experts on Bugs, Debugging, Contradiction, etc.  AM
ignores the frame problem, and resolves paradoxes only when a
contradiction actually surfaces.

λλ Depth of Understanding: Each concept is an expert.  His knowledge
consists of rules which trigger in appropriate situations (the
condition/action pairs of the first sketch below).  AM has good
abilities to add to facets of existing concepts; mediocre abilities
to synthesize new concepts; and limited abilities to manipulate and
create new heuristic rules.

.E
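
To make the Directionality and Depth points concrete, here is a
schematic (and purely illustrative) top-level loop: pick the most
interesting task from the agenda, ripple away from the concept named
in that task to gather heuristic rules, and fire each rule whose
condition recognizes the situation.  The (condition, action) rule
format, the facet names, and the numeric priorities are assumptions
of this sketch, not AM's actual Interlisp code.

.BEGIN NOFILL

import heapq

def run(agenda, concepts, steps=200):
    # A task is (-interest, concept, facet); heappop then yields the
    # most interesting unfilled-in facet first.
    for _ in range(steps):
        if not agenda:
            break
        _, concept, facet = heapq.heappop(agenda)
        for condition, action in ripple(concepts, concept, facet):
            if condition(concepts, concept, facet):   # rule triggers only
                for task in action(concepts, concept, facet):
                    heapq.heappush(agenda, task)      # actions fill in the
                                                      # facet and may propose
                                                      # further tasks

def ripple(concepts, concept, facet):
    # Gather relevant rules by walking the GENERALIZATIONS ties
    # outward from the concept mentioned in the current task.
    rules, frontier, seen = [], [concept], set()
    while frontier:
        c = frontier.pop()
        if c in seen or c not in concepts:
            continue
        seen.add(c)
        rules += concepts[c].get("HEURISTICS", {}).get(facet, [])
        frontier += concepts[c].get("GENERALIZATIONS", [])
    return rules

.END

Nothing else drives the system: the only real action is the
selective completion of facets that this loop performs, with the
occasional creation of a new concept by one of the fired rules.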
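
The compiled/inspectable duality mentioned under Efficiency might be
pictured as follows; the field names here are again invented for the
sketch.

.BEGIN NOFILL

# An executable facet entry kept in two forms: a fast compiled form
# for running, and a transparent source form for examination (and
# for recompilation whenever the entry is edited).
definition_entry = {
    "COMPILED":    lambda n: n > 1 and all(n % d != 0
                                           for d in range(2, n)),
    "INSPECTABLE": "(LAMBDA (N) (AND (GREATERP N 1) ...))",  # source text
}

is_prime_7 = definition_entry["COMPILED"](7)   # run the fast form
source     = definition_entry["INSPECTABLE"]   # examine the clear form

.END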